From experiments to articulatory motion - a three dimensional talking head model
Authors
Abstract
The goal of this study is to develop a customised computer model that accurately represents the motion of the vocal articulators during vowels and consonants. Models of the articulators were constructed as Finite Element (FE) meshes based on digitised high-resolution MRI (Magnetic Resonance Imaging) scans obtained during quiet breathing. Articulatory kinematics during speech were recorded with EMA (Electromagnetic Articulography) and video of the face. The movement information thus acquired was applied to the FE model to drive jaw motion, modelled as a rigid body, and tongue, cheek and lip movements, modelled with a free-form deformation technique. The motion of the epiglottis is also considered in the model.
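The abstract does not give implementation details for how the measured movements drive the mesh, but the two mechanisms it names are a rigid-body transform for the jaw and a lattice-based free-form deformation (FFD, in the sense of Sederberg and Parry) for the soft tissues. The sketch below is a minimal illustration under that assumption; the function names, array shapes and lattice layout are hypothetical, not taken from the paper.

```python
import numpy as np
from math import comb


def bernstein(n, i, t):
    """Bernstein basis polynomial B_i^n(t); t may be a scalar or an array."""
    return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))


def rigid_transform(vertices, rotation, translation):
    """Rigid-body (jaw-like) motion of mesh vertices.
    vertices: (N, 3) array, rotation: (3, 3), translation: (3,)."""
    return vertices @ rotation.T + translation


def ffd(vertices, lattice, bbox_min, bbox_max):
    """Trilinear Bernstein free-form deformation.
    lattice: (L, M, N, 3) displaced control points spanning the bounding box
    [bbox_min, bbox_max] that encloses the deformable region."""
    L, M, N, _ = lattice.shape
    # Normalised local coordinates (s, t, u) of each vertex inside the lattice box.
    stu = (vertices - bbox_min) / (bbox_max - bbox_min)
    out = np.zeros_like(vertices)
    for i in range(L):
        bi = bernstein(L - 1, i, stu[:, 0])
        for j in range(M):
            bj = bernstein(M - 1, j, stu[:, 1])
            for k in range(N):
                bk = bernstein(N - 1, k, stu[:, 2])
                out += (bi * bj * bk)[:, None] * lattice[i, j, k]
    return out


# Example: an undisplaced regular lattice reproduces the identity; shifting one
# control point (e.g. by an EMA-derived displacement) pulls nearby vertices along.
verts = np.random.rand(100, 3)
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), axis=-1)
grid[2, 2, 2] += np.array([0.0, 0.0, 0.1])  # hypothetical measured control-point shift
deformed = ffd(verts, grid, np.zeros(3), np.ones(3))
```

The attraction of an FFD-style scheme in this setting is that displacing a small lattice of control points, driven by a handful of measured fleshpoints, moves all enclosed mesh vertices smoothly, so a dense FE mesh can follow sparse EMA data without per-vertex correspondence.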
Similar Articles
Towards an Audiovisual Virtual Talking Head: 3D Articulatory Modeling of Tongue, Lips and Face Based on MRI and Video Images
A linear three-dimensional articulatory model of tongue, lips and face is presented. The model is based on a linear component analysis of the 3D coordinates defining the geometry of the different organs, obtained from Magnetic Resonance Imaging of the tongue, and from front and profile video images of the subject’s face marked with small beads. In addition to a common jaw height parameter, the ...
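The "linear component analysis of the 3D coordinates" described in this snippet is, in essence, a principal component decomposition of the stacked organ geometries. A minimal sketch under that reading is given below; the names, shapes and the synthesis step are hypothetical illustrations, not the authors' code.

```python
import numpy as np


def build_linear_model(shapes):
    """Linear component analysis of articulator geometry.
    shapes: (n_samples, n_points, 3) 3D coordinates for each observed configuration.
    Returns the mean shape, the principal components and their singular values."""
    X = shapes.reshape(len(shapes), -1)          # flatten to (n_samples, 3 * n_points)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt, S


def synthesize(mean, components, weights):
    """Reconstruct a shape from a few component weights
    (articulatory parameters such as a common jaw height)."""
    flat = mean + weights @ components[: len(weights)]
    return flat.reshape(-1, 3)
```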
Articulatory synthesis using corpus-based estimation of line spectrum pairs
An attempt to define a new articulatory synthesis method, in which the speech signal is generated through a statistical estimation of its relation with articulatory parameters, is presented. A corpus containing acoustic material and simultaneous recordings of the tongue and facial movements was used to train and test the articulatory synthesis of VCV words and short sentences. Tongue and facial...
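The snippet describes the acoustic mapping only as a "statistical estimation" of the relation between articulatory parameters and the speech signal, parameterised as line spectrum pairs (LSPs). A simple stand-in for such a mapping is a frame-wise linear regression from articulatory features to LSP vectors, sketched below under that assumption; all names are hypothetical.

```python
import numpy as np


def train_articulatory_to_lsp(articulatory, lsp):
    """Least-squares mapping from articulatory features (e.g. EMA coils and
    face markers) to line spectrum pairs, estimated frame by frame.
    articulatory: (n_frames, n_art), lsp: (n_frames, n_lsp)."""
    A = np.hstack([articulatory, np.ones((len(articulatory), 1))])  # bias column
    W, *_ = np.linalg.lstsq(A, lsp, rcond=None)
    return W


def predict_lsp(W, articulatory):
    """Predict LSP frames for unseen articulatory trajectories."""
    A = np.hstack([articulatory, np.ones((len(articulatory), 1))])
    return A @ W
```

A corpus-trained mapping of this kind can then be evaluated, as the snippet suggests, by synthesising held-out VCV words and short sentences from measured tongue and facial movements.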
Transforming an embodied conversational agent into an efficient talking head: from keyframe-based animation to multimodal concatenation synthesis
BACKGROUND: Virtual humans have become part of our everyday life (movies, internet, and computer games). Even though they are becoming more and more realistic, their speech capabilities are, most of the time, limited and not coherent and/or not synchronous with the corresponding acoustic signal. METHODS: We describe a method to convert a virtual human avatar (animated through key frames and int...
Phoneme-level articulatory animation in pronunciation training
Speech visualization is extended to use animated talking heads for computer-assisted pronunciation training. In this paper, we design a data-driven 3D talking head system for articulatory animations with synthesized articulator dynamics at the phoneme level. A database of AG500 EMA recordings of three-dimensional articulatory movements is proposed to explore the distinctions of producing the so...